Dynamic multi-objective optimization algorithm based on weight vector clustering
Erchao LI, Yanli CHENG
Journal of Computer Applications    2023, 43 (7): 2226-2236.   DOI: 10.11772/j.issn.1001-9081.2022060843

There are many Dynamic Multiobjective Optimization Problems (DMOPs) in real life. For such problems, when the environment changes, a Dynamic Multi-Objective Evolutionary Algorithm (DMOEA) is required to track the Pareto Front (PF) or Pareto Set (PS) quickly and accurately in the new environment. Aiming at the poor population prediction performance of existing algorithms, a dynamic multi-objective optimization algorithm based on Weight Vector Clustering Prediction (WVCP) was proposed. Firstly, uniform weight vectors were generated in the objective space, and the individuals in the population were clustered; according to the clustering results, the distribution of the population was analyzed. Secondly, a time series was established for the center points of the clustered individuals. For the same weight vector, corresponding coping strategies were adopted to supplement individuals according to the different clustering situations. If there were cluster centers at all adjacent moments, a difference model was used to predict individuals in the new environment. If there was no cluster center at a certain moment, the centroid of the cluster centers of adjacent weight vectors was used as the cluster center at that moment, and then the difference model was used to predict individuals. In this way, the problem of poor population distribution was solved effectively, and the accuracy of prediction was improved at the same time. Finally, the introduced individual supplement strategy made full use of historical information. To verify the performance of the proposed algorithm, simulation comparisons between this algorithm and four representative algorithms were carried out. Experimental results show that the proposed algorithm can solve DMOPs well.
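The difference-model prediction step described above can be illustrated with a short sketch. This is a minimal illustration rather than the authors' implementation: it assumes the cluster center of each weight vector at the two most recent time steps is available, predicts the new center with a first-order difference model, and fills a missing center with the centroid of the centers of adjacent weight vectors; the noise scale `sigma` used to generate supplementary individuals is a hypothetical parameter.

```python
import numpy as np

def predict_centers(prev_centers, curr_centers, neighbors, sigma=0.05, n_per_cluster=5):
    """Sketch of the WVCP prediction step.

    prev_centers, curr_centers: dicts mapping weight-vector index -> center (1D array),
                                or None when no individual was clustered to that vector.
    neighbors: dict mapping weight-vector index -> indices of adjacent weight vectors.
    """
    rng = np.random.default_rng(0)
    new_population = []
    for k, curr in curr_centers.items():
        if curr is None:
            # No cluster center at this moment: substitute the centroid of the
            # centers belonging to adjacent weight vectors.
            nb = [curr_centers[j] for j in neighbors[k] if curr_centers[j] is not None]
            if not nb:
                continue
            curr = np.mean(nb, axis=0)
        prev = prev_centers.get(k)
        # First-order difference model: c_{t+1} = c_t + (c_t - c_{t-1}).
        predicted = curr + (curr - prev) if prev is not None else curr
        # Supplement individuals around the predicted center (Gaussian perturbation).
        new_population.append(predicted + sigma * rng.standard_normal((n_per_cluster, curr.size)))
    return np.vstack(new_population)
```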

Cross-domain person re-identification method based on attention mechanism with learning intra-domain variance
Daili CHEN, Guoliang XU
Journal of Computer Applications    2022, 42 (5): 1391-1397.   DOI: 10.11772/j.issn.1001-9081.2021030459

To solve the severe performance degradation problem of the person re-identification task during cross-domain migration, a new cross-domain person re-identification method based on an attention mechanism with learning of intra-domain variance was proposed. Firstly, ResNet50 was used as the backbone network and some modifications were made to it, so that it was more suitable for the person re-identification task, and the Instance-Batch Normalization Network (IBN-Net) was introduced to improve the generalization ability of the model. At the same time, for the purpose of learning more discriminative features, a region attention branch was added to the backbone network. The training of the source domain was treated as a classification task: cross-entropy loss was utilized for supervised learning of the source domain, and triplet loss was introduced to mine the details of source domain samples and improve the classification performance on the source domain. For the training of the target domain, intra-domain variance was considered to adapt to the difference in data distribution between the source domain and the target domain. In the test phase, the output of the ResNet50 pool-5 layer was used as image features, and the Euclidean distance between the query image and a candidate image was calculated to measure their similarity. In experiments on the two large-scale public datasets Market-1501 and DukeMTMC-reID, the Rank-1 accuracy of the proposed method is 80.1% and 67.7% respectively, and its mean Average Precision (mAP) is 49.5% and 44.2% respectively. Experimental results show that the proposed method has better performance in improving the generalization ability of the model.

One-shot video-based person re-identification with multi-loss learning and joint metric
Yuchang YIN, Hongyuan WANG, Li CHEN, Zundeng FENG, Yu XIAO
Journal of Computer Applications    2022, 42 (3): 764-769.   DOI: 10.11772/j.issn.1001-9081.2021040788

In order to solve the problem of the huge labeling cost for person re-identification, a method of one-shot video-based person re-identification with multi-loss learning and joint metric was proposed. Aiming at the problem that the number of labeled samples is small and the obtained model is not robust enough, a Multi-Loss Learning (MLL) strategy was proposed: in each training process, different loss functions were used for different data to optimize and improve the discriminative ability of the model. Secondly, a Joint Distance Metric (JDM) was proposed for label estimation, which combined the sample distance and the nearest neighbor distance to further improve the accuracy of pseudo label prediction. JDM solved the problems of the low accuracy of label estimation for unlabeled data and the instability in the training process caused by unlabeled data not being fully utilized. Experimental results show that, compared with the one-shot progressive learning method PL (Progressive Learning), the rank-1 accuracy of the proposed method reaches 65.5% and 76.2% on the MARS and DukeMTMC-VideoReID datasets when the ratio of pseudo label samples added per iteration is 0.10, improvements of 7.6 and 5.2 percentage points respectively.
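A rough sketch of the joint-metric idea follows. The exact combination used in JDM is not given in the abstract, so the weighted sum below (parameter `lam`) and the neighbourhood term are assumptions for illustration only; the sketch merely shows how a plain sample distance and a nearest-neighbour distance can be fused before assigning pseudo labels.

```python
import numpy as np

def joint_distance(unlabeled_feats, labeled_feats, labeled_ids, lam=0.5, k=5):
    """Toy joint metric: weighted sum of direct distance and a k-NN neighbourhood distance.

    Returns, for every unlabeled sample, the identity of the closest labeled sample
    under the joint distance (its pseudo label) and that distance."""
    # Pairwise Euclidean distances between unlabeled and labeled features.
    d = np.linalg.norm(unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=2)
    # Assumed neighbourhood term: for each labeled sample, the average distance of
    # the k unlabeled samples closest to it.
    knn = np.sort(d, axis=0)[:k].mean(axis=0)            # shape: (n_labeled,)
    joint = lam * d + (1.0 - lam) * knn[None, :]          # fuse the two distances
    nearest = joint.argmin(axis=1)
    return labeled_ids[nearest], joint[np.arange(len(d)), nearest]
```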

Single direction projected Transformer method for aliasing text detection
Zhida FENG, Li CHEN
Journal of Computer Applications    2022, 42 (12): 3686-3691.   DOI: 10.11772/j.issn.1001-9081.2021101749

To address the performance degradation of segmentation-based text detection methods in aliasing text scenes, a Single Direction Projected Transformer (SDPT) was proposed for aliasing text detection. Firstly, multi-scale features were extracted and fused by using a deep Residual Network (ResNet) and a Feature Pyramid Network (FPN). Then, the feature map was projected into a vector sequence by horizontal projection and fed into the Transformer module for modeling, thereby mining the relationships between text lines. Finally, joint optimization was performed with multiple objectives. Extensive experiments were conducted on the synthetic dataset BDD-SynText and the real dataset RealText. The results show that the proposed SDPT achieves the best results for text detection with a high aliasing level, and improves F1-Score (IoU75) by at least 21.36 percentage points on BDD-SynText and 18.11 percentage points on RealText compared with state-of-the-art text detection algorithms such as Progressive Scale Expansion Network (PSENet) under the same backbone network (ResNet50), verifying the important role of the proposed method in improving aliasing text detection.

Short-term trajectory prediction model of aircraft based on attention mechanism and generative adversarial network
Yuli CHEN, Qiang TONG, Tongtong CHEN, Shoulu HOU, Xiulei LIU
Journal of Computer Applications    2022, 42 (10): 3292-3299.   DOI: 10.11772/j.issn.1001-9081.2021081387

A single Long Short-Term Memory (LSTM) network cannot effectively extract key information or accurately fit the data distribution in trajectory prediction. In order to solve these problems, a short-term trajectory prediction model of aircraft based on an attention mechanism and Generative Adversarial Network (GAN) was proposed. Firstly, different weights were assigned to the trajectory points by introducing an attention mechanism, so that the influence of important features in the trajectory was enhanced. Secondly, the trajectory sequence features were extracted by using LSTM, and a convergence net was used to gather all aircraft features within the time step. Finally, the characteristic of GAN of optimizing continuously in an adversarial game was used to optimize the model and improve its accuracy. Compared with the Social Generative Adversarial Network (SGAN) on the climb-phase dataset, the proposed model reduces the Average Displacement Error (ADE), Final Displacement Error (FDE) and Maximum Displacement Error (MDE) by 20.0%, 20.4% and 18.3% respectively. Experimental results show that the proposed model can predict future trajectories more accurately.
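As a minimal illustration of the attention step (assigning weights to trajectory points), the sketch below applies a scoring vector to a sequence of hidden states and returns the weighted summary. The scoring vector `w` and the dimensions are hypothetical stand-ins for learned parameters; the real model combines this with LSTM feature extraction and GAN training.

```python
import numpy as np

def attention_pool(hidden_states, w):
    """hidden_states: (T, d) sequence of LSTM outputs for one trajectory.
    w: (d,) scoring vector (stand-in for the learned attention parameters).
    Returns the attention weights and the weighted trajectory summary."""
    scores = hidden_states @ w                        # one score per time step
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over time steps
    context = weights @ hidden_states                 # (d,) weighted combination
    return weights, context

# Example: 8 time steps, 16-dimensional hidden states.
h = np.random.default_rng(1).standard_normal((8, 16))
w = np.ones(16) / 16
alpha, ctx = attention_pool(h, w)
```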

Binary classification to multiple classification progressive detection network for aero-engine damage images
FAN Wei, LI Chenxuan, XING Yan, HUANG Rui, PENG Hongjian
Journal of Computer Applications    2021, 41 (8): 2352-2357.   DOI: 10.11772/j.issn.1001-9081.2020101575
Aero-engine damage is an important factor affecting flight safety. There are two main problems in current computer vision-based damage detection of engine borescope images: one is that the complex background of borescope images makes the model detect damage with low accuracy; the other is that the data source of borescope images is limited, which leads to fewer detectable classes for the model. In order to solve these two problems, a Mask R-CNN (Mask Region-based Convolutional Neural Network) based progressive detection network from binary classification to multiple classification was proposed for aero-engine damage images. A binary classification detection branch was added to Mask R-CNN: firstly, the damage in the image was detected in a binary way and regression optimization was performed on the localization coordinates. Secondly, the original detection branch was used to progressively perform multiple classification detection, so as to further optimize the damage detection results by regression and determine the damage class. Finally, instance segmentation was performed on the damage through the Mask branch according to the results of multiple classification detection. In order to increase the detection classes of the model and verify the effectiveness of the method, a dataset of 1 315 borescope images with 8 damage classes was constructed. The training and testing results on this dataset show that the Average Precision (AP) and AP75 (Average Precision under IoU (Intersection over Union) of 75%) of multiple classification detection are improved by 3.34% and 9.71% respectively compared with those of Mask R-CNN. It can be seen that the proposed method can effectively improve the multiple classification detection accuracy for damage in borescope images.
Network security situation prediction based on improved particle swarm optimization and extreme learning machine
TANG Yanqiang, LI Chenghai, SONG Yafei
Journal of Computer Applications    2021, 41 (3): 768-773.   DOI: 10.11772/j.issn.1001-9081.2020060924
Focusing on the problems of low prediction accuracy and slow convergence speed of network security situation prediction models, a prediction method based on the Improved Particle Swarm Optimization Extreme Learning Machine (IPSO-ELM) algorithm was proposed. Firstly, the inertia weight and learning factors of the Particle Swarm Optimization (PSO) algorithm were improved to realize adaptive adjustment of the two parameters as the number of iterations increases, so that PSO had a large search range and fast speed in the initial stage, and strong convergence ability and stability in the later stage. Secondly, aiming at the problem that PSO easily falls into a local optimum, a particle stagnation disturbance strategy was proposed to re-guide particles trapped in a local optimum toward the global optimum. The Improved Particle Swarm Optimization (IPSO) algorithm obtained in this way ensured global optimization ability and enhanced local search ability. Finally, IPSO was combined with the Extreme Learning Machine (ELM) to optimize the initial weights and thresholds of ELM. Compared with basic ELM, the ELM optimized by IPSO improved the prediction accuracy by 44.25%. Experimental results show that, compared with PSO-ELM, the fitting degree of the prediction results of IPSO-ELM reaches 0.99, and its convergence rate is improved by 47.43%. The proposed algorithm is clearly better than the comparison algorithms in prediction accuracy and convergence speed.
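The adaptive parameter adjustment and the stagnation disturbance can be sketched as follows. The linear update laws and the stagnation threshold here are assumptions chosen for illustration; the paper defines its own adaptive formulas.

```python
import numpy as np

def adaptive_params(t, T, w_max=0.9, w_min=0.4, c_start=2.5, c_end=0.5):
    """Assumed adaptive laws: the inertia weight decreases with iteration t (of T),
    the cognitive factor c1 decreases and the social factor c2 increases."""
    ratio = t / T
    w = w_max - (w_max - w_min) * ratio
    c1 = c_start - (c_start - c_end) * ratio
    c2 = c_end + (c_start - c_end) * ratio
    return w, c1, c2

def disturb_if_stagnant(position, stagnant_iters, bounds, threshold=10, rng=None):
    """If a particle's personal best has not improved for `threshold` iterations,
    re-initialise it randomly inside the search bounds (stagnation disturbance)."""
    rng = rng or np.random.default_rng()
    if stagnant_iters >= threshold:
        low, high = bounds
        return rng.uniform(low, high, size=position.shape), 0
    return position, stagnant_iters
```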
Audio watermarking algorithm in MP3 compressed domain based on low frequency energy ratio of channels
LI Chen, WANG Kexin, TIAN Lihua
Journal of Computer Applications    2018, 38 (8): 2301-2305.   DOI: 10.11772/j.issn.1001-9081.2018020298
To address the inefficiency of most current audio watermarking algorithms when applied to MP3 audio, as well as their imbalance between robustness and imperceptibility, a compressed-domain watermarking algorithm based on the low frequency energy of the channels of MP3 frames was proposed. The watermark can be embedded and extracted during the MP3 compression and decompression processes, which greatly improves efficiency. Considering the good stability of low frequency energy, the low frequency energy of each channel was calculated using the Modified Discrete Cosine Transform (MDCT) coefficients produced in MP3 encoding and decoding, then the ratio between the energy of the left and right channels was quantized with a fixed step, and the watermark was embedded by modifying some MDCT coefficients according to the quantized results. Meanwhile, using the proportion of energy in different scalefactor bands, the embedding bands were selected before calculating the low frequency energy of the channels, which ensured a good balance between robustness and imperceptibility. The experimental results show that the proposed algorithm has good robustness against various types of attacks, especially MP3 recompression attacks, while maintaining the original audio quality.
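The core embedding idea, quantising the left/right low-frequency energy ratio with a fixed step, can be sketched as below. This is a simplified quantisation-index-modulation view under assumed parameters; selecting scalefactor bands and actually modifying the MDCT coefficients inside the MP3 codec is not shown.

```python
import numpy as np

def embed_bit(energy_left, energy_right, bit, step=0.1):
    """Quantise the left/right low-frequency energy ratio so that the parity of the
    quantisation index carries one watermark bit; return the target ratio."""
    ratio = energy_left / energy_right
    q = int(np.floor(ratio / step))
    if q % 2 != bit:            # force the index parity to match the bit
        q += 1
    return (q + 0.5) * step     # centre of the chosen quantisation cell

def extract_bit(energy_left, energy_right, step=0.1):
    """Recover the bit from the parity of the quantisation index."""
    ratio = energy_left / energy_right
    return int(np.floor(ratio / step)) % 2
```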
Vulnerability threat assessment based on improved variable precision rough set
JIANG Yang, LI Chenghai
Journal of Computer Applications    2017, 37 (5): 1353-1356.   DOI: 10.11772/j.issn.1001-9081.2017.05.1353
Variable Precision Rough Set (VPRS) can effectively process noisy data, but its portability is poor. Aiming at this problem, an improved vulnerability threat assessment model was proposed by introducing the threshold parameter α. First of all, an assessment decision table was created according to the characteristic properties of vulnerabilities. Then, the k-means algorithm was used to discretize the continuous attributes. Next, by adjusting the values of β and α, attribute reduction was performed and probabilistic decision rules were derived. Finally, the test data were matched against the rule base and the vulnerability assessment results were obtained. The simulation results show that the accuracy of the proposed method is 19.66 percentage points higher than that of the VPRS method, and the portability is enhanced.
Improved geodesic active contour image segmentation model based on Bessel filter
LIU Guoqi, LI Chenjing
Journal of Computer Applications    2017, 37 (12): 3536-3540.   DOI: 10.11772/j.issn.1001-9081.2017.12.3536
The active contour model is widely used in image segmentation and object contour extraction, and the edge-based Geodesic Active Contour (GAC) model is widely used for extracting objects with obvious edges. However, the evolution of GAC requires many iterations and a long time. To solve these problems, the GAC model was improved with Bessel filter theory. Firstly, the image was smoothed by a Bessel filter to reduce noise. Secondly, a new edge stop term was constructed based on the edge detection function of the Bessel filter and incorporated into the GAC model. Finally, a Reaction Diffusion (RD) term was added to the constructed model to avoid re-initialization of the level set. The experimental results show that, compared with several edge-based models, the proposed model improves time efficiency while ensuring the accuracy of the segmentation results, and is more suitable for practical applications.
Outsourced data encryption scheme with access privilege revocation
LI Chengwen, WANG Xiaoming
Journal of Computer Applications    2016, 36 (1): 216-221.   DOI: 10.11772/j.issn.1001-9081.2016.01.0216
The scheme proposed by Zhou et al. (ZHOU M, MU Y, SUSILO W, et al. Privacy enhanced data outsourcing in the cloud. Journal of Network and Computer Applications, 2012, 35(4): 1367-1373) was analyzed, and its shortcoming of lacking access privilege revocation was shown. To address this shortcoming, an outsourced data encryption scheme with access privilege revocation was proposed. Firstly, the data were divided into several data blocks, and each data block was encrypted separately. Secondly, with a key derivation method, the number of keys stored and managed by the data owner was reduced. Finally, multiple decryption keys were constructed on one piece of encrypted data to revoke the access privileges of some users without affecting the legitimate users. Compared with Zhou's scheme, the proposed scheme not only maintains that scheme's advantage of privacy protection for the outsourced data, but also realizes access privilege revocation for users. The analysis results show that the proposed scheme is secure under the Discrete Logarithm Problem (DLP) assumption.
One projection subspace pursuit for signal reconstruction in compressed sensing
LIU Xiaoqing LI Youming LI Chengcheng JI Biao CHEN Bin ZHOU Ting
Journal of Computer Applications    2014, 34 (9): 2514-2517.   DOI: 10.11772/j.issn.1001-9081.2014.09.2514

In order to reduce the complexity of signal reconstruction algorithms and to reconstruct signals with unknown sparsity, a new algorithm named One Projection Subspace Pursuit (OPSP) was proposed. Firstly, the upper and lower bounds of the signal's sparsity were determined based on the restricted isometry property, and the sparsity was set to the integer midpoint of these bounds. Secondly, under the framework of Subspace Pursuit (SP), the projection of the observation onto the support set in each iteration was removed to decrease the computational complexity of the algorithm. Furthermore, the reconstruction rate of the whole signal was used as the index of reconstruction performance. The simulation results show that, compared with the traditional SP algorithm, the proposed algorithm can reconstruct signals of unknown sparsity with less time and a higher reconstruction rate, and it is effective for signal reconstruction.
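The sketch below illustrates the "one projection per iteration" idea under stated assumptions: the sparsity K is taken as the integer midpoint of hypothetical RIP-derived bounds `k_low` and `k_high`, and each iteration solves a single least-squares projection on the expanded candidate support and then prunes the resulting coefficients directly, instead of re-projecting as standard SP does. The details differ from the paper's exact derivation.

```python
import numpy as np

def opsp(A, y, k_low, k_high, max_iter=30, tol=1e-6):
    """Simplified One Projection Subspace Pursuit sketch (assumed variant of SP)."""
    m, n = A.shape
    K = (k_low + k_high) // 2                        # sparsity set to midpoint of bounds
    support = np.argsort(np.abs(A.T @ y))[-K:]       # initial support by correlation
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A @ x
    for _ in range(max_iter):
        # Expand the support with the K columns most correlated with the residual.
        cand = np.union1d(support, np.argsort(np.abs(A.T @ residual))[-K:])
        coef = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]   # the single projection
        idx = np.argsort(np.abs(coef))[-K:]                     # prune to K entries
        keep = cand[idx]
        x = np.zeros(n)
        x[keep] = coef[idx]                                     # reuse pruned coefficients
        new_residual = y - A @ x
        if np.linalg.norm(new_residual) > np.linalg.norm(residual) - tol:
            break
        support, residual = keep, new_residual
    return x
```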

Speed adaptive vertical handoff algorithm based on application requirements
TAO Yang JIANG Yanli CHEN Leicheng
Journal of Computer Applications    2014, 34 (5): 1236-1238.   DOI: 10.11772/j.issn.1001-9081.2014.05.1236

The Next Generation Network (NGN) is a converged network that uses different radio access technologies. In this converged network environment, vertical handoff between different wireless access technologies becomes an important research topic. However, most vertical handoff algorithms do not consider the actual demands of the application or the mobility of the user, and instead take network properties as the only judgment criteria. To solve this problem, a speed-adaptive vertical handoff algorithm based on application requirements was proposed. It used a speed factor and a network property matrix to compensate for the quality loss of the wireless link caused by mobility, and adaptively adjusted the weights of the network properties required by the application, supporting the node in making effective decisions. The algorithm realized speed-adaptive vertical handoff that better serves the application. Simulation results show that the proposed algorithm can effectively overcome the ping-pong effect and achieves higher packet throughput in comparison with the other vertical handoff algorithms.

Corner detection algorithm using correlation matrix of Gabor directional derivatives
ZHU Zhanli CHEN Yuxin
Journal of Computer Applications    2013, 33 (10): 2902-2906.  
To improve the accuracy of corner detection, a new corner detection algorithm was proposed, which used the Gabor directional derivatives of each pixel to construct correlation matrices on the edge contours for detecting corners. The algorithm firstly extracted the edge map of the image using the Canny edge detector; secondly, the image was smoothed by Gabor filters and the correlation matrices were constructed using the Gabor directional derivatives of each edge pixel and its surrounding pixels. If the sum of the normalized eigenvalues was both above a pre-specified threshold and a local maximum, the pixel was labeled as a corner. Compared with traditional contour-based corner detection algorithms, the proposed algorithm uses the related information of the Gabor directional derivatives of an edge pixel and its surrounding pixels to extract corners, hence achieving better robustness to noise. The experimental results indicate that the proposed algorithm detects more matched corners and fewer false corners in both the noise-free and noisy cases, and achieves an obvious improvement in performance.
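The corner test on an edge pixel can be sketched as follows: given the Gabor directional-derivative responses of the pixel and its neighbours, a correlation matrix is accumulated and an eigenvalue-based measure is compared with a threshold. The normalisation used below (sum of eigenvalues divided by the largest one) is an assumption for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def corner_measure(derivs):
    """derivs: (n_pixels, n_orientations) Gabor directional derivatives of an edge
    pixel and its surrounding pixels. Returns a scalar corner measure to be compared
    with a threshold and a local-maximum test along the contour."""
    M = derivs.T @ derivs                        # orientation-space correlation matrix
    eig = np.linalg.eigvalsh(M)                  # symmetric matrix: use eigvalsh
    # Assumed normalisation: the measure is large when several directions carry
    # comparable energy (corner-like), and close to 1 when one direction dominates (edge).
    return eig.sum() / (eig.max() + 1e-12)
```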
Image retrieval based on color and motif characteristics
YU Sheng XIE Li CHENG Yun
Journal of Computer Applications    2013, 33 (06): 1674-1708.   DOI: 10.3724/SP.J.1087.2013.01674
In order to improve image retrieval performance, a new image retrieval algorithm based on motif and color features was proposed. The edge gradient of the color image was detected, and a motif image was obtained by transforming the edge gradient image. Taking the centroid of the motif image as the datum point, the distances from all points to the datum point were calculated to obtain the motif center distance histogram. All motifs of the motif image were projected in four different directions to obtain the motif projection histogram. The color image was uniformly quantized from RGB space into a 64-color space to obtain the color histogram. These three histograms described the image features used for retrieval. The experimental results show that the algorithm has high precision and recall.
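The 64-color histogram step is straightforward; a sketch under the usual assumption of 4 uniform levels per RGB channel is given below (the motif histograms, which depend on the motif transform itself, are omitted).

```python
import numpy as np

def color_histogram_64(image):
    """image: (H, W, 3) uint8 RGB array. Quantise each channel into 4 uniform
    levels (4*4*4 = 64 colours) and return the normalised 64-bin histogram."""
    q = (image // 64).astype(np.int64)               # each channel mapped to 0..3
    codes = q[..., 0] * 16 + q[..., 1] * 4 + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=64).astype(float)
    return hist / hist.sum()
```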
Data-modeling and implementation for massive construction project data based on manageable entity-oriented object
LI Chenghua JIANG Xiaoping XIANG Wen LI Bin
Journal of Computer Applications    2013, 33 (04): 1010-1014.   DOI: 10.3724/SP.J.1087.2013.01010
To meet the requirement of building a Project Information Portal (PIP) data center on a unified data model, a manageable entity-oriented object data model was proposed. The project data were treated as a series of manageable entities based on management workflows decomposed according to the whole life cycle, and the conceptual-layer data model was designed so that the project data could be naturally represented and recorded with this model. A data organization method based on MongoDB (a document-oriented database technology) was presented, and the cluster storage architecture for the PIP was also addressed. Experiments show that the approach performs efficiently in data writing and querying, and offers high availability and storage capacity scalability.
Performance analysis and improvement of forward error correction encoder in G3-PLC
WU Xiaomeng LIU Hongli LI Cheng GU Zhiru
Journal of Computer Applications    2013, 33 (02): 393-396.   DOI: 10.3724/SP.J.1087.2013.00393
To solve the problems of the single, low rate of the convolutional code and the large loss of data rate in the G3 standard, the low-voltage power line carrier communication system model based on Orthogonal Frequency Division Multiplexing (OFDM) in the G3 standard was analyzed, and a design scheme for the forward error correction encoder was presented based on RS encoding, convolutional encoding, puncturing and depuncturing, repetition encoding and a two-dimensional time-frequency interleaving algorithm. Moreover, a method for raising the code rate by puncturing and depuncturing was introduced in detail. The simulation results show that the rate of the convolutional code is raised from 1/2 to 2/3 and the data rate is improved without increasing the decoding complexity, while effective and reliable communication is still achieved, which means the scheme can be widely used in low-voltage Power Line Communication (PLC).
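Puncturing a rate-1/2 mother convolutional code up to rate 2/3 can be sketched as below; the puncturing pattern shown (drop one bit out of every four) is a standard illustrative choice and is an assumption, not necessarily the exact G3-PLC pattern. Depuncturing re-inserts neutral erasure values at the dropped positions before decoding.

```python
import numpy as np

# Assumed puncturing pattern over 4 mother-code bits (2 input bits -> 4 coded bits):
# keep 3 of them, giving rate 2/3. True = transmit, False = puncture.
PATTERN = np.array([True, True, True, False])

def puncture(coded_bits):
    coded_bits = np.asarray(coded_bits)
    mask = np.resize(PATTERN, coded_bits.size)       # tile the pattern over the stream
    return coded_bits[mask]

def depuncture(received, total_len, erasure=0.0):
    """Re-insert 'erasure' values at punctured positions so the stream regains the
    mother-code length expected by the Viterbi decoder."""
    mask = np.resize(PATTERN, total_len)
    out = np.full(total_len, erasure, dtype=float)
    out[mask] = received
    return out
```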
Image denoising based on Riemann-Liouville fractional integral
HUANG Guo XU Li CHEN Qingli PU Yifei
Journal of Computer Applications    2013, 33 (01): 35-39.   DOI: 10.3724/SP.J.1087.2013.00035
To preserve more image texture information while obtaining better denoising performance, the Riemann-Liouville (R-L) fractional integral operator in signal processing was described, the R-L fractional integral theory was introduced into digital image denoising, and a ladder approximation method was used to implement the numerical calculation. The model constructed the corresponding image denoising mask by setting a tiny integral order to achieve local fine-tuning of the noisy image, and it controlled the denoising effect through iteration to obtain better denoising results. The experimental results show that, compared with traditional image denoising algorithms, the proposed image denoising algorithm based on the R-L fractional integral can enhance the Signal-to-Noise Ratio (SNR) of the image; the SNR of the denoised image reaches 18.3497 dB, at least about 4% higher than with the traditional denoising algorithms. In addition, the proposed algorithm better retains weak edges and texture details of the image.
Event-B interpretation for spacecraft description language model
QI Yan-xia SHEN Hui-li CHEN Zhao-hui GU Bin
Journal of Computer Applications    2012, 32 (12): 3525-3528.   DOI: 10.3724/SP.J.1087.2012.03525
A requirement modeling language called SPARDL was proposed for modeling and analyzing periodic control systems that consist of periodic behaviors together with a mode transition mechanism. An Event-B interpretation was specified for the SPARDL model: the semantics of SPARDL were presented in Event-B, and a refinement framework was introduced to develop Event-B models based on the features of the SPARDL model. Finally, a case study was analyzed to show the effectiveness of the proposed approach to modeling and validating the SPARDL model with Event-B.
Linear model for blind evaluation of image scrambling degree based on difference statistic distribution
WANG Cong-li CHEN Zhi-bin XUE Ming-xi ZHANG Chao
Journal of Computer Applications    2012, 32 (12): 3470-3473.   DOI: 10.3724/SP.J.1087.2012.03470
Most current approaches to evaluating the degree of image scrambling depend on the original image and lack a rigorous mathematical model as their theoretical basis. By analyzing the difference statistic distribution of scrambled images, a linear model for the difference statistic distribution of an ideally scrambled image was put forward. Furthermore, three methods were presented based on this model to evaluate the image scrambling degree: the first was the absolute difference of slope, the second was the absolute difference of difference, and the third was the overlapping area method. The experimental results indicate that these methods are very sensitive to the statistical distribution of the image difference, are independent of the original image, and agree well with the human vision system, so they can objectively achieve blind evaluation of the image scrambling degree.
Location algorithm based on BP neural network in OFDM system
MAO Yong-yi LI Cheng ZHANG Hong-jun
Journal of Computer Applications    2012, 32 (09): 2426-2428.   DOI: 10.3724/SP.J.1087.2012.02426
To reduce the effect of multi-path interference on positioning accuracy in Orthogonal Frequency Division Multiplexing (OFDM) systems, a location algorithm based on Back Propagation (BP) neural networks was proposed. The MUltiple SIgnal Classification (MUSIC) algorithm was used to estimate the Time Of Arrival (TOA) of the first arrival path and to calculate the Time Difference Of Arrival (TDOA). Then a BP neural network was used to correct the TDOA. Finally, the Chan algorithm was used to determine the location of the mobile station. The location algorithm was simulated in a multi-path environment. The simulation results show that this algorithm can effectively reduce the effect of multi-path interference and that its performance is better than the Least Square (LS), Chan and Taylor algorithms.
Hadoop-based storage architecture for mass MP3 files
ZHAO Xiao-yong YANG Yang SUN Li-li CHEN Yu
Journal of Computer Applications    2012, 32 (06): 1724-1726.   DOI: 10.3724/SP.J.1087.2012.01724
As the de facto standard for digital music, MP3 involves a very large number of files and rapidly growing user access requirements. How to effectively store and manage vast amounts of MP3 files while providing a good user experience has become a growing concern. The emergence of Hadoop provides new ideas, but Hadoop itself is not suitable for handling massive numbers of small files. Therefore, a Hadoop-based storage architecture for massive MP3 files was presented which makes full use of the rich metadata of MP3 files. A classification algorithm in the pre-processing module merged small files into sequence files, and an efficient indexing mechanism was introduced, which served as a good solution to the small-file problem. The experimental results show that the approach achieves better performance.
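The merge-and-index idea can be illustrated independently of Hadoop's own APIs: the sketch below packs many small files into one large container file and records (offset, length) for each key in a separate index, which is how a SequenceFile-style solution avoids one-block-per-small-file overhead. The file names and layout here are hypothetical.

```python
import json
from pathlib import Path

def pack_small_files(paths, container="mp3_pack.bin", index_file="mp3_pack.idx.json"):
    """Concatenate small files into one container and build a name -> (offset, length) index."""
    index = {}
    with open(container, "wb") as out:
        for p in map(Path, paths):
            data = p.read_bytes()
            index[p.name] = (out.tell(), len(data))   # record where this file starts
            out.write(data)
    Path(index_file).write_text(json.dumps(index))
    return index

def read_packed(name, container="mp3_pack.bin", index_file="mp3_pack.idx.json"):
    """Random access to one packed file via the index, without scanning the container."""
    index = json.loads(Path(index_file).read_text())
    offset, length = index[name]
    with open(container, "rb") as f:
        f.seek(offset)
        return f.read(length)
```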
Cross-layer resource allocation algorithm of MIMO-OFDM systems with partial channel state information
HUANG Yu-qing LI Cheng-xin LI Qiang
Journal of Computer Applications    2012, 32 (05): 1211-1216.  
Cross-layer design is an effective technique for future mobile communication systems. A cross-layer resource allocation algorithm with partial channel state information was explored to maximize the total system throughput of a multi-user MIMO-OFDM (Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing) system. The objective function of the optimization problem was designed based on the power limitation constraint, the transmission rate, the average queue length and sub-carrier occupancy, the Quality of Service (QoS) requirements of different services, and the queue state information of the data link layer. Under the condition of a finite-length user buffer in the data link layer, the mean feedback model was utilized to describe the feedback process of the channel state information, and the corresponding cross-layer resource allocation criteria were then derived. The simulation results show that, compared with existing schemes, the proposed algorithm obtains reasonable throughput performance and reduces the packet loss rate while providing better QoS for each user of different services.
New method for extracting region of interest of dynamic PET images based on curve clustering
TIAN Ping-ping LIU Li CHEN Yu-ting
Journal of Computer Applications    2012, 32 (02): 535-550.   DOI: 10.3724/SP.J.1087.2012.00535
Concerning the problem that many current clustering methods based on kinetic characteristics ignore the continuous temporal information of the Time Activity Curve (TAC), a method for Region Of Interest (ROI) extraction based on curve clustering was proposed. The method contains three steps. Firstly, the K-Means algorithm was used to remove the background and obtain a coarse mask of the heart. Secondly, curve clustering was used to extract the myocardium from the heart obtained in the first step. Finally, the blood cavity was delineated based on the spatial relationship between pixels. The method was applied to extract the ROI from fourteen mouse PET images. The experimental results indicate that the proposed method delineates the blood cavities of the fourteen mice more accurately than K-Means and the Hybrid Clustering Method (HCM), and is more precise and stable.
Incentive strategy based on Bayesian game model in wireless multi-hop network
XU Li CHEN Xin-yu CHEN Zhi-de
Journal of Computer Applications    2011, 31 (12): 3169-3173.  
The increasing intelligence of nodes brings more applications to wireless multi-hop networks, and the security problem also becomes more crucial. In order to prevent the adverse effects of selfish or malicious nodes, a cross-layer mechanism based on game theory is proposed. A Bayesian game model is developed for information sharing between the physical layer and the link layer. The Bayesian game model is applied to derive and analyze the mutual information among nodes and to form an effective mutual-supervision incentive for node cooperation. The effectiveness of the proposed Bayesian game model is shown through careful case studies and comprehensive computer simulations.
Naive Bayesian text classification algorithm in cloud computing environment
JIANG Xiao-ping LI Cheng-hua XIANG Wen ZHANG Xin-fang
Journal of Computer Applications    2011, 31 (09): 2551-2554.   DOI: 10.3724/SP.J.1087.2011.02551
The major procedures of text classification, such as uniform text format expression, training, testing and classifying, based on the Naive Bayesian text classification algorithm were implemented using the MapReduce programming model. Experiments were carried out in a Hadoop cloud computing environment. The experimental results indicate an essentially linear speedup as the number of node computers increases, and a recall rate of 86% was achieved when classifying Chinese Web pages.
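The training step maps naturally onto MapReduce. Below is a local, dependency-free sketch of the mapper/reducer pair (per-class word counts), with document parsing and smoothing simplified for illustration; it is not the authors' Hadoop job.

```python
from collections import defaultdict

def mapper(doc_class, text):
    """Emit ((class, word), 1) pairs for one training document."""
    for word in text.split():
        yield (doc_class, word), 1

def reducer(pairs):
    """Sum the counts for every (class, word) key -- the Naive Bayes statistics."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Local simulation of the MapReduce job on two tiny documents.
docs = [("sports", "match score score"), ("tech", "cpu cache match")]
intermediate = [pair for cls, text in docs for pair in mapper(cls, text)]
model_counts = reducer(intermediate)   # e.g. {("sports", "score"): 2, ...}
```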
Adaptive subcarrier allocation of multiuser STBC-OFDM systems in correlated channels
Qiang LI Cheng-xin LI Yu-qing HUANG Yuan-cheng YAO
Journal of Computer Applications    2011, 31 (07): 1948-1951.   DOI: 10.3724/SP.J.1087.2011.01948
With the optimization goal of minimizing the total transmit power, an adaptive subcarrier allocation algorithm based on partial Channel State Information (CSI) under spatially correlated Rayleigh fading channels was proposed for multiuser STBC-OFDM downlink systems. In the course of algorithm implementation, the Kronecker model was used to express the spatially correlated Multiple-Input Multiple-Output (MIMO) Rayleigh fading channel of each subcarrier, and the dynamic CSIT (CSI at the Transmitter) model was utilized to describe the process of CSI feedback; thus, the corresponding subcarrier allocation criteria could be deduced by means of the basic principles of Space-Time Block Codes (STBC). The experimental results show that the proposed algorithm not only effectively reflects the effects of the correlation coefficients of the antenna correlation matrix and the delayed feedback parameters on system performance, but also has good performance in contrast to subcarrier allocation without CSIT.
Recognition of splice sites based on fuzzy support vector machine
Bo SUN Xiao-xia LI Cheng-guo LI
Journal of Computer Applications    2011, 31 (04): 1117-1120.   DOI: 10.3724/SP.J.1087.2011.01117
In order to improve the splice site recognition accuracy of the Fuzzy Support Vector Machine (FSVM), a new method for computing the membership degree of samples was proposed. The initial membership was defined as the distance ratio of the sample to the two cluster centers of the positive and negative samples; K-Nearest Neighbor (KNN) was adopted to compute the tightness of the samples; and the product of the tightness and the initial membership degree was used as the final membership. This not only improves the membership degree of support vectors, but also reduces the membership degree of noise samples. The method was applied to splice site recognition, and the experimental results show that the recognition accuracy for constitutive 5′ and 3′ splice sites reaches 94.65% and 88.97% respectively. Compared with the classical support vector machine, the recognition accuracy for constitutive 3′ splice sites increases by 7.94%.
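A compact sketch of the membership computation follows. The exact distance-ratio and tightness definitions are only paraphrased in the abstract, so the formulas below (ratio of distances to the opposite-class and to both class centers, multiplied by the fraction of same-class neighbours among the K nearest) should be read as one plausible instantiation rather than the paper's exact formulas.

```python
import numpy as np

def fsvm_membership(X, y, k=5):
    """X: (n, d) samples, y: (n,) labels in {+1, -1}. Returns a fuzzy membership per sample."""
    center_pos = X[y == 1].mean(axis=0)
    center_neg = X[y == -1].mean(axis=0)
    d_pos = np.linalg.norm(X - center_pos, axis=1)
    d_neg = np.linalg.norm(X - center_neg, axis=1)
    # Initial membership: relative distance to the opposite-class center (assumed form).
    d_own = np.where(y == 1, d_pos, d_neg)
    d_other = np.where(y == 1, d_neg, d_pos)
    initial = d_other / (d_own + d_other + 1e-12)
    # Tightness: fraction of the k nearest neighbours sharing the sample's label.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    nn = np.argsort(dists, axis=1)[:, :k]
    tightness = (y[nn] == y[:, None]).mean(axis=1)
    return initial * tightness
```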
General model for dynamic contract net protocol based on object-oriented Petri net
Dan LI Li CHEN Gong-li LI
Journal of Computer Applications   
The traditional contract net protocol model, which works by bid invitation between a manager Agent and contractor Agents, can successfully realize cooperation among Agents and complete the goal task. However, it also faces problems such as high communication traffic and weak generality. Therefore, the Object-Oriented Petri net and the concept of object Agent were used to describe a dynamic contract net protocol. The analysis shows that the new model is live and concurrent, and that it can reduce the traffic and enhance the generality.
Shanghai city spatial information system for application and service based on spatial information grid
Bai-Lang YU Jian-Ping WU Ai-Li CHEN Da-Jun QIAN
Journal of Computer Applications   
The architecture, technologies and implementation of the Shanghai city spatial information system for application and service, a SIG (Spatial Information Grid)-based platform, were introduced. The system covers ten categories of spatial information resources, including city planning, land use, real estate, river system, transportation, municipal facility construction, environmental protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are all distributed across different Web-based nodes. A single database was created to store the metadata of all the spatial information, and a portal site was published as the main user interface of the system. The portal site has three main functions. First, users can search the metadata and acquire the distributed data by using the search results. Second, spatial processing Web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transformation, cartographic generalization and spatial analysis, are offered. Third, the GIS Web Services currently available in the system can be searched and new ones can be registered. The system has been working efficiently in the Shanghai government network since 2005.